Simultaneous Control and Human Feedback in the Training of a Robotic Agent with Actor-Critic Reinforcement Learning
Authors
Abstract
This paper contributes a preliminary report on the advantages and disadvantages of incorporating simultaneous human control and feedback signals in the training of a reinforcement learning robotic agent. While robotic human-machine interfaces have become increasingly complex in both form and function, control remains challenging for users. This has resulted in a growing gap between user control approaches and the number of robotic actuators which can be controlled. One way to address this gap is to shift autonomy to the robot. Semi-autonomous actions of the robotic agent can then be shaped and refined by human feedback, simplifying user control. Most prior work on human-agent shaping has incorporated training with feedback, or has included indirect control signals. By contrast, in this paper we explore how a human can provide concurrent feedback signals and real-time myoelectric control signals to train a robot's actor-critic reinforcement learning control system. Using both a physical and a simulated robotic system, we compare training performance on a simple movement task when reward is derived from the environment, from the human, and from the combination of the two. Our results indicate that benefit may be gained by including human-generated feedback in learning algorithms for this complex human-machine interactive domain.
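As a rough illustration of the reward-combination idea described above, the sketch below blends an environment reward with a human feedback signal inside a simple tabular actor-critic update. It is a minimal sketch only, not the paper's implementation: the state and action counts, the step sizes, and the human_weight blending factor are assumptions made for illustration.

```python
# Minimal sketch (not the paper's code): a tabular actor-critic where the
# scalar reward blends an environment signal with a human feedback signal.
import numpy as np

n_states, n_actions = 10, 3
theta = np.zeros((n_states, n_actions))   # actor: softmax policy preferences
v = np.zeros(n_states)                    # critic: state-value estimates
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.95
human_weight = 1.0                        # assumed weight on human feedback

def policy(s):
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def step(s, a, env_reward, human_feedback, s_next, done):
    """One actor-critic update driven by a combined reward signal."""
    r = env_reward + human_weight * human_feedback    # blend the two signals
    target = r + (0.0 if done else gamma * v[s_next])
    td_error = target - v[s]                          # critic's TD error
    v[s] += alpha_critic * td_error                   # critic update
    grad = -policy(s)                                 # softmax log-prob gradient
    grad[a] += 1.0
    theta[s] += alpha_actor * td_error * grad         # actor update
```

Setting human_weight to zero recovers the environment-only condition, while zeroing env_reward corresponds to learning from human feedback alone, which is the kind of comparison the paper reports.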
Similar Resources
Supervised Actor-Critic Reinforcement Learning
Editor’s Summary: Chapter ?? introduced policy gradients as a way to improve on stochastic search of the policy space when learning. This chapter presents supervised actor-critic reinforcement learning as another method for improving the effectiveness of learning. With this approach, a supervisor adds structure to a learning problem and supervised learning makes that structure part of an actor-...
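One common way to realize the supervised actor-critic idea summarized above is to execute a weighted blend of the supervisor's suggested action and the actor's own output, gradually handing control to the learner. The sketch below assumes continuous actions and a hand-picked annealing schedule; neither detail comes from the chapter.

```python
# Illustrative sketch of composing supervisor and actor actions; the gain
# schedule and continuous action representation are assumptions.
import numpy as np

def composite_action(actor_action, supervisor_action, k):
    """k = 1 defers fully to the supervisor, k = 0 acts fully autonomously."""
    return k * np.asarray(supervisor_action) + (1.0 - k) * np.asarray(actor_action)

def supervisor_gain(episode, handover_episodes=50):
    """Example schedule: shift autonomy toward the actor as training progresses."""
    return max(0.0, 1.0 - episode / handover_episodes)
```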
Learning to Play Donkey Kong Using Neural Networks and Reinforcement Learning
Neural networks and reinforcement learning have successfully been applied to various games, such as Ms. Pacman and Go. We combine multilayer perceptrons and a class of reinforcement learning algorithms known as actor-critic to learn to play the arcade classic Donkey Kong. Two neural networks are used in this study: the actor and the critic. The actor learns to select the best action given the g...
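The two-network setup described above can be sketched as a pair of small multilayer perceptrons, one producing action probabilities and one estimating state value. PyTorch, the layer widths, and the state/action dimensions are assumptions here, not details from the study.

```python
# Sketch only: an actor and a critic as two separate multilayer perceptrons.
# The framework, layer sizes, and state/action dimensions are assumptions.
import torch
import torch.nn as nn

state_dim, n_actions = 64, 6    # assumed game-state encoding and action count

actor = nn.Sequential(           # maps a state to action probabilities
    nn.Linear(state_dim, 128), nn.ReLU(),
    nn.Linear(128, n_actions), nn.Softmax(dim=-1),
)
critic = nn.Sequential(          # maps a state to a scalar value estimate
    nn.Linear(state_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
```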
Actor-Critic Reinforcement Learning with Simultaneous Human Control and Feedback
This paper contributes a first study into how different human users deliver simultaneous control and feedback signals during human-robot interaction. As part of this work, we formalize and present a general interactive learning framework for online cooperation between humans and reinforcement learning agents. In many human-machine interaction settings, there is a growing gap between the degrees-...
Reinforcement Learning for Learning Rate Control
Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. We propose an algorithm to automatically learn lear...
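The SGD update referred to in this abstract can be written as theta_{t+1} = theta_t - eta_t * grad_t. The sketch below shows where a learned per-step learning rate eta_t would plug in; the quadratic toy loss and the decaying schedule are assumptions used only for illustration.

```python
# Sketch of the SGD step described above; the toy loss and the decaying
# schedule are assumptions, standing in for a learned controller.
import numpy as np

def sgd_step(theta, grad, eta_t):
    """theta_{t+1} = theta_t - eta_t * grad_t"""
    return theta - eta_t * grad

theta = np.array([2.0, -3.0])
for t in range(100):
    grad = 2.0 * theta               # gradient of the toy loss ||theta||^2
    eta_t = 0.1 / (1.0 + 0.01 * t)   # a learned policy could supply eta_t instead
    theta = sgd_step(theta, grad, eta_t)
```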
An Actor-Critic Algorithm for Learning Rate Learning
Stochastic gradient descent (SGD), which updates the model parameters by adding a local gradient times a learning rate at each step, is widely used in model training of machine learning algorithms such as neural networks. It is observed that the models trained by SGD are sensitive to learning rates and good learning rates are problem specific. To avoid manually searching for learning rates, whic...
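In an actor-critic formulation of this problem, the learning rate itself becomes the action and the change in training loss is a natural reward signal. The snippet below is an assumed illustration of that framing, not the algorithm from the paper.

```python
# Assumed illustration: reward a learning-rate controller by the loss decrease
# it produces; the candidate rates form a hypothetical discrete action set.
def lr_reward(loss_before, loss_after):
    return loss_before - loss_after   # positive when the chosen rate helped

candidate_rates = [1e-1, 1e-2, 1e-3]  # hypothetical actions for the actor
```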
Journal: CoRR
Volume: abs/1606.06979
Publication year: 2016